
    Tunable optical Aharonov-Bohm effect in a semiconductor quantum ring

    By applying an electric field perpendicular to a semiconductor quantum ring, we show that it is possible to tune the single-particle wave function between quantum dot (QD)-like and ring-like states. The constraints on the geometrical parameters of the quantum ring required to realize such a transition are derived. With such a perpendicular electric field we are able to tune the Aharonov-Bohm (AB) effect both for single particles and for excitons. The tunability lies both in the strength of the AB effect and in its periodicity. We also investigate the strain-induced potential inside the self-assembled quantum ring and the effect of the strain on the AB effect.
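
    The AB periodicity being tuned can be stated compactly. As a point of reference (the standard single-particle result for an ideal one-dimensional ring, not this paper's specific geometry):

```latex
% Energy levels of an ideal 1D ring of radius R threaded by
% magnetic flux \Phi; \Phi_0 = h/e is the flux quantum and m^*
% the effective mass.
E_m(\Phi) = \frac{\hbar^2}{2 m^* R^2}\left(m - \frac{\Phi}{\Phi_0}\right)^2,
\qquad m \in \mathbb{Z}
```

    The spectrum (and hence the ground state) is periodic in the flux with period $\Phi_0$; deforming the wave function between QD-like and ring-like states changes the effective radius $R$ and thereby the strength and period of the oscillations.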

    Are the Tails of Percolation Thresholds Gaussians?

    The probability distribution of percolation thresholds in finite lattices was first believed to follow a normal Gaussian behaviour. With increasing computer power and more efficient simulational techniques, this belief turned instead to a stretched-exponential behaviour. Here, based on a further improvement of Monte Carlo data, we show evidence that this question is not yet answered at all. Comment: 7 pages including 3 figures
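
    The quantity whose distribution is at issue can be reproduced at toy scale. A hypothetical pure-Python sketch (union-find site percolation on a small L×L lattice, recording the occupied fraction at which a top-to-bottom spanning cluster first appears), not the authors' improved simulation technique:

```python
import random
import statistics

def percolation_threshold(L, rng):
    """One realization: occupy sites of an L x L lattice in random
    order until a cluster spans top to bottom; return the occupied
    fraction at that moment (a finite-size threshold sample)."""
    n = L * L
    top, bottom = n, n + 1            # virtual nodes for the spanning test
    parent = list(range(n + 2))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    occupied = [False] * n
    order = list(range(n))
    rng.shuffle(order)
    for step, site in enumerate(order, start=1):
        occupied[site] = True
        r, c = divmod(site, L)
        if r == 0:
            union(site, top)
        if r == L - 1:
            union(site, bottom)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < L and 0 <= cc < L and occupied[rr * L + cc]:
                union(site, rr * L + cc)
        if find(top) == find(bottom):
            return step / n
    return 1.0

rng = random.Random(0)
samples = [percolation_threshold(16, rng) for _ in range(200)]
mean = statistics.fmean(samples)
spread = statistics.pstdev(samples)
```

    The question in the abstract concerns the far tails of exactly such a sample distribution, which is where naive histograms like this one run out of statistics and where the Gaussian vs. stretched-exponential distinction lives.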

    The Tails of the Crossing Probability

    The scaling of the tails of the probability $\pi_{hs}$ of a system to percolate only in the horizontal direction was investigated numerically for the correlated site-bond percolation model for $q=1,2,3,4$. We demonstrate that the tails of the crossing probability far from the critical point have the shape $\pi_{hs}(p) \simeq D \exp(c L[p-p_{c}]^{\nu})$, where $\nu$ is the correlation length index and $p=1-\exp(-\beta)$ is the probability of a bond to be closed. At criticality we observe a crossover to another scaling, $\pi_{hs}(p) \simeq A \exp(-b (L[p-p_{c}]^{\nu})^{z})$, where $z$ is a scaling index describing the central part of the crossing probability. Comment: 20 pages, 7 figures; v3: one fitting procedure is changed, grammatical changes

    Dual-Camera Joint Deblurring-Denoising

    Recent image enhancement methods have shown the advantages of using a pair of long- and short-exposure images for low-light photography. These image modalities offer complementary strengths and weaknesses. The former yields an image that is clean but blurry due to camera or object motion, whereas the latter is sharp but noisy due to low photon count. Motivated by the fact that modern smartphones come equipped with multiple rear-facing camera sensors, we propose a novel dual-camera method for obtaining a high-quality image. Our method uses a synchronized burst of short-exposure images captured by one camera and a long-exposure image simultaneously captured by another. Having a synchronized short-exposure burst alongside the long-exposure image enables us to (i) obtain better denoising by using a burst instead of a single image, (ii) recover motion from the burst and use it for motion-aware deblurring of the long-exposure image, and (iii) fuse the two results to further enhance quality. Our method achieves state-of-the-art results on synthetic dual-camera images from the GoPro dataset with five times fewer training parameters compared to the next best method. We also show that our method qualitatively outperforms competing approaches on real synchronized dual-camera captures. Comment: Project webpage: http://shekshaa.github.io/Joint-Deblurring-Denoising
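
    The first ingredient, burst denoising, rests on the statistical fact that averaging N independent noisy frames shrinks the noise standard deviation by roughly a factor of sqrt(N). A toy illustration in pure Python with a synthetic flat "image" and Gaussian read noise (not the paper's learned pipeline, which must also handle motion between frames):

```python
import random
import statistics

rng = random.Random(42)
signal = [0.5] * 1000                       # flat ground-truth "image"
sigma = 0.1                                 # per-frame noise level
burst = [[s + rng.gauss(0, sigma) for s in signal] for _ in range(8)]

# Average the burst pixel-wise: with 8 frames the noise std
# should drop by roughly sqrt(8) ~ 2.8x.
merged = [sum(frame[i] for frame in burst) / len(burst)
          for i in range(len(signal))]

noise_single = statistics.pstdev(x - 0.5 for x in burst[0])
noise_merged = statistics.pstdev(x - 0.5 for x in merged)
```

    In the real setting the frames are not perfectly aligned, which is why the method recovers motion from the burst before merging and reuses that motion estimate for deblurring the long exposure.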

    A Compact Linear Programming Relaxation for Binary Sub-modular MRF

    We propose a novel compact linear programming (LP) relaxation for binary sub-modular MRF in the context of object segmentation. Our model is obtained by linearizing an $\ell_1^+$-norm derived from the quadratic programming (QP) form of the MRF energy. The resultant LP model contains significantly fewer variables and constraints compared to the conventional LP relaxation of the MRF energy. In addition, unlike QP which can produce ambiguous labels, our model can be viewed as a quasi-total-variation minimization problem, and it can therefore preserve the discontinuities in the labels. We further establish a relaxation bound between our LP model and the conventional LP model. In the experiments, we demonstrate our method for the task of interactive object segmentation. Our LP model outperforms QP when converting the continuous labels to binary labels using different threshold values on the entire Oxford interactive segmentation dataset. The computational complexity of our LP is of the same order as that of the QP, and it is significantly lower than the conventional LP relaxation.
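
    For contrast, the conventional LP relaxation that the paper improves on can be written down explicitly for a tiny chain MRF: relax each binary label to x_i in [0,1] and linearize each pairwise term w_e|x_i - x_j| with an auxiliary variable z_e >= +-(x_i - x_j). A hypothetical sketch that only builds this standard LP (this is not the paper's compact formulation, and solving it would require an external LP solver such as SciPy's linprog):

```python
def build_mrf_lp(unary, edges, weights):
    """Conventional LP relaxation of a binary pairwise MRF.

    Variables: x_0..x_{n-1} (relaxed labels), then one z_e per edge
    (linearization of w_e * |x_i - x_j|). Returns (c, A_ub, b_ub,
    bounds) in the standard form: minimize c^T v s.t. A_ub v <= b_ub.
    """
    n, m = len(unary), len(edges)
    c = list(unary) + list(weights)          # objective coefficients
    A_ub, b_ub = [], []
    for e, (i, j) in enumerate(edges):
        # x_i - x_j - z_e <= 0
        row = [0.0] * (n + m)
        row[i], row[j], row[n + e] = 1.0, -1.0, -1.0
        A_ub.append(row)
        b_ub.append(0.0)
        # x_j - x_i - z_e <= 0
        row = [0.0] * (n + m)
        row[j], row[i], row[n + e] = 1.0, -1.0, -1.0
        A_ub.append(row)
        b_ub.append(0.0)
    bounds = [(0.0, 1.0)] * n + [(0.0, None)] * m
    return c, A_ub, b_ub, bounds

# 3-node chain: strong unaries pull x0 toward 1 and x2 toward 0,
# with two smoothness edges of weight 1.
c, A_ub, b_ub, bounds = build_mrf_lp(
    unary=[-2.0, 0.0, 2.0],
    edges=[(0, 1), (1, 2)],
    weights=[1.0, 1.0])
```

    Note the cost of this formulation: two inequality rows and one auxiliary variable per edge, which is exactly the overhead the paper's compact relaxation is designed to avoid.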

    Superpixel quality in microscopy images: the impact of noise & denoising

    Microscopy is a valuable imaging tool in various biomedical research areas. Recent developments have made high-resolution acquisition possible within a relatively short time. State-of-the-art imaging equipment such as serial block-face electron microscopes acquire gigabytes of data in a matter of hours. In order to make these amounts of data manageable, a more data-efficient representation is required. A popular approach to such data efficiency is superpixels, which are designed to cluster homogeneous regions without crossing object boundaries. The use of superpixels as a pre-processing step has shown significant improvements in making computationally intensive computer vision analysis algorithms more tractable on large amounts of data. However, microscopy datasets in particular can be degraded by noise and most superpixel algorithms do not take this artifact into account. In this paper, we give a quantitative and qualitative comparison of superpixels generated on original and denoised images. We show that several advanced superpixel techniques are hampered by noise artifacts and require denoising and parameter tuning as a pre-processing step. The evaluation is performed on the Berkeley segmentation dataset as well as on fluorescence and scanning electron microscopy data.
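
    The tension the paper studies, suppressing noise while keeping object boundaries intact, shows up even in the simplest setting. A hypothetical pure-Python sketch on a noisy 1-D intensity profile with a step edge (a box filter is a crude stand-in for the denoisers evaluated in the paper):

```python
import random
import statistics

rng = random.Random(1)
# Step edge at index 50 in a noisy 1-D intensity profile.
clean = [0.0] * 50 + [1.0] * 50
noisy = [v + rng.gauss(0, 0.3) for v in clean]

def box_filter(x, k=5):
    """Moving-average denoiser; the window is clamped at the borders."""
    half = k // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

denoised = box_filter(noisy)

# Noise on the flat region drops, while the step edge survives
# (values well below 0.5 left of the edge, well above 0.5 right of it).
flat_noisy = statistics.pstdev(noisy[:40])
flat_denoised = statistics.pstdev(denoised[:40])
```

    Superpixel algorithms face the 2-D analogue of this trade-off: without a denoising step, spurious gradients from noise compete with true object boundaries, which is what degrades the boundary adherence measured in the paper.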

    Watch Your Steps: Local Image and Scene Editing by Text Instructions

    Denoising diffusion models have enabled high-quality image generation and editing. We present a method to localize the desired edit region implicit in a text instruction. We leverage InstructPix2Pix (IP2P) and identify the discrepancy between IP2P predictions with and without the instruction. This discrepancy is referred to as the relevance map. The relevance map conveys the importance of changing each pixel to achieve the edits, and is used to guide the modifications. This guidance ensures that the irrelevant pixels remain unchanged. Relevance maps are further used to enhance the quality of text-guided editing of 3D scenes in the form of neural radiance fields. A field is trained on relevance maps of training views, denoted as the relevance field, defining the 3D region within which modifications should be made. We perform iterative updates on the training views guided by rendered relevance maps from the relevance field. Our method achieves state-of-the-art performance on both image and NeRF editing tasks. Project page: https://ashmrz.github.io/WatchYourSteps/ Comment: Project page: https://ashmrz.github.io/WatchYourSteps
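
    The relevance-map construction, taking the per-pixel discrepancy between instruction-conditioned and unconditioned predictions, normalizing it, and thresholding it into an edit mask, can be sketched without the diffusion model itself. A hypothetical stand-in where the two predictions are simply given as flat pixel lists (the quantile threshold is an illustrative choice, not the paper's):

```python
def relevance_map(pred_with, pred_without, quantile=0.8):
    """Per-pixel |difference| between the instruction-conditioned and
    unconditioned predictions, normalized to [0, 1], then thresholded
    at the given quantile to form a binary edit mask."""
    diff = [abs(a - b) for a, b in zip(pred_with, pred_without)]
    hi = max(diff) or 1.0                 # avoid division by zero
    rel = [d / hi for d in diff]
    cutoff = sorted(rel)[int(quantile * (len(rel) - 1))]
    mask = [1 if r >= cutoff else 0 for r in rel]
    return rel, mask

# Toy "image": the instruction only changes the two middle pixels,
# so only those should fall inside the edit mask.
with_instr    = [0.1, 0.1, 0.9, 0.8, 0.1]
without_instr = [0.1, 0.1, 0.1, 0.1, 0.1]
rel, mask = relevance_map(with_instr, without_instr)
```

    The mask is what enforces the "irrelevant pixels remain unchanged" guarantee; the 3-D extension trains a field on such maps rendered from many views so the edit region is consistent across the scene.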

    A Graph Theoretic Approach for Object Shape Representation in Compositional Hierarchies Using a Hybrid Generative-Descriptive Model

    A graph theoretic approach is proposed for object shape representation in a hierarchical compositional architecture called Compositional Hierarchy of Parts (CHOP). In the proposed approach, vocabulary learning is performed using a hybrid generative-descriptive model. First, statistical relationships between parts are learned using a Minimum Conditional Entropy Clustering algorithm. Then, selection of descriptive parts is defined as a frequent subgraph discovery problem, and solved using a Minimum Description Length (MDL) principle. Finally, part compositions are constructed by compressing the internal data representation with discovered substructures. Shape representation and computational complexity properties of the proposed approach and algorithms are examined using six benchmark two-dimensional shape image datasets. Experiments show that CHOP can employ part shareability and indexing mechanisms for fast inference of part compositions using learned shape vocabularies. Additionally, CHOP provides better shape retrieval performance than the state-of-the-art shape retrieval methods. Comment: Paper: 17 pages. 13th European Conference on Computer Vision (ECCV 2014), Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III, pp 566-581. Supplementary material can be downloaded from http://link.springer.com/content/esm/chp:10.1007/978-3-319-10578-9_37/file/MediaObjects/978-3-319-10578-9_37_MOESM1_ESM.pd

    Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D Scene Representations

    Neural Radiance Fields (NeRFs) have proven to be powerful 3D representations, capable of high-quality novel view synthesis of complex scenes. While NeRFs have been applied to graphics, vision, and robotics, problems with slow rendering speed and characteristic visual artifacts prevent adoption in many use cases. In this work, we investigate combining an autoencoder (AE) with a NeRF, in which latent features (instead of colours) are rendered and then convolutionally decoded. The resulting latent-space NeRF can produce novel views with higher quality than standard colour-space NeRFs, as the AE can correct certain visual artifacts, while rendering over three times faster. Our work is orthogonal to other techniques for improving NeRF efficiency. Further, we can control the tradeoff between efficiency and image quality by shrinking the AE architecture, achieving over 13 times faster rendering with only a small drop in performance. We hope that our approach can form the basis of an efficient, yet high-fidelity, 3D scene representation for downstream tasks, especially when retaining differentiability is useful, as in many robotics scenarios requiring continual learning.